
    Code Quality Evaluation Methodology Using The ISO/IEC 9126 Standard

    This work proposes a methodology for evaluating the source code quality and static behaviour of a software system, based on the ISO/IEC 9126 standard. It uses elements automatically derived from source code, enhanced with expert knowledge in the form of quality-characteristic rankings, allowing software engineers to assign weights to source code attributes. It is flexible in the set of metrics and source code attributes employed, and even in the ISO/IEC 9126 characteristics to be assessed. We applied the methodology in two case studies, involving five open-source systems and one proprietary system. The results demonstrated that the methodology can capture software quality trends and express expert perceptions of system quality in a quantitative and systematic manner. (Comment: 20 pages, 14 figures.)
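    The core idea of weighting source code attributes by expert rankings can be sketched as a weighted aggregation. The metric names and weights below are illustrative assumptions, not values from the paper:

    ```python
    # Hypothetical sketch: aggregate normalized source-code metrics into a
    # single ISO/IEC 9126 characteristic score via expert-assigned weights.
    # Metric names and weight values are illustrative, not from the paper.

    def characteristic_score(metrics, weights):
        """Weighted average of normalized metric values (each in [0, 1])."""
        total_weight = sum(weights[m] for m in metrics)
        return sum(metrics[m] * weights[m] for m in metrics) / total_weight

    # Assumed normalized metric values for one system (higher = better).
    metrics = {"cyclomatic_complexity": 0.7, "comment_density": 0.4, "coupling": 0.6}
    # Assumed expert rankings expressed as integer weights.
    weights = {"cyclomatic_complexity": 3, "comment_density": 1, "coupling": 2}

    print(round(characteristic_score(metrics, weights), 3))  # → 0.617
    ```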

    Proactive Buildings: A Prescriptive Maintenance Approach

    Prescriptive maintenance has recently attracted considerable scientific attention. It integrates the advantages of descriptive and predictive analytics to automate the detection of non-nominal device functionality. Implementing such proactive measures in home or industrial settings may improve equipment dependability and minimize operational expenses. Several techniques for prescriptive maintenance exist for diverse use cases, but none elaborates a general methodology that enables successful prescriptive analysis for small-scale industrial or residential settings. This study reports on prescriptive analytics and assesses recent research efforts on multi-domain prescriptive maintenance. Given the existing state of the art, the main contribution of this work is a broad framework for prescriptive maintenance that may be interpreted as a high-level approach for enabling proactive buildings.

    Association Rules Mining by Improving the Imperialism Competitive Algorithm (ARMICA)

    Part 5: AI & Rule-Based Modeling (AIRUMO). Many algorithms have been proposed for Association Rules Mining (ARM), such as Apriori. However, such algorithms often have a downside for real-world use: they rely on users to set two parameters manually, namely minimum support and confidence. In this paper, we propose Association Rules Mining by improving the Imperialism Competitive Algorithm (ARMICA), a novel ARM method based on the heuristic Imperialism Competitive Algorithm (ICA), for finding frequent itemsets and extracting rules from datasets whilst setting support automatically. Its structure allows it to produce only the strongest and most frequent rules, in contrast to many ARM algorithms, thus alleviating the need to define minimum support and confidence. Experimental results indicate that ARMICA generates accurate rules faster than Apriori.
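    The abstract's key quantities, itemset support and rule confidence, and one naive way of deriving a support threshold from the data rather than from the user, can be sketched as follows. This does not reproduce ARMICA's ICA-based search; the mean-support heuristic and the transaction data are assumptions for illustration only:

    ```python
    from collections import Counter
    from itertools import combinations

    # Toy transaction database (illustrative).
    transactions = [
        {"bread", "milk"},
        {"bread", "milk"},
        {"bread", "milk", "butter"},
        {"butter"},
    ]

    # Count supports of all 1- and 2-itemsets.
    counts = Counter()
    for t in transactions:
        for k in (1, 2):
            for items in combinations(sorted(t), k):
                counts[frozenset(items)] += 1

    n = len(transactions)
    supports = {s: c / n for s, c in counts.items()}

    # Assumed heuristic (not the paper's): take the mean support as threshold,
    # so no user-supplied minimum support is needed.
    min_support = sum(supports.values()) / len(supports)
    frequent = {s for s, sup in supports.items() if sup >= min_support}

    # Confidence of rule {a} -> {b} for each frequent pair.
    for pair in (s for s in frequent if len(s) == 2):
        a, b = sorted(pair)
        conf = supports[pair] / supports[frozenset([a])]
        print(f"{{{a}}} -> {{{b}}}: support={supports[pair]:.2f}, confidence={conf:.2f}")
    ```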

    T3: A Classification Algorithm for Data Mining

    This paper describes and evaluates T3, an algorithm that builds trees of depth at most three and achieves high accuracy whilst keeping the size of the tree reasonably small. T3 is an improvement over T2 in that it builds larger trees and adopts a less greedy approach. T3 gave better results than both T2 and C4.5 when run against publicly available data sets: T3 decreased classification error on average by 47% and generalisation error by 29% compared to T2, and produced 46% smaller trees with 32% less classification error compared to C4.5. Due to its way of handling unknown values, T3 outperforms C4.5 in generalisation, 99% to 66%, on a specific medical dataset.
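    The idea of a depth-capped decision tree can be sketched with a minimal learner that splits on misclassification count and stops at a fixed depth. This is only a sketch of the general technique; T3's actual split criteria and unknown-value handling are not reproduced, and the XOR toy data is an assumption:

    ```python
    # Minimal depth-capped decision tree in the spirit of T3 (illustrative only).
    from collections import Counter

    def majority(labels):
        return Counter(labels).most_common(1)[0][0]

    def build_tree(rows, labels, depth=0, max_depth=3):
        if depth == max_depth or len(set(labels)) == 1:
            return majority(labels)  # leaf: majority class
        best = None  # (errors, feature, threshold)
        for f in range(len(rows[0])):
            for t in {r[f] for r in rows}:
                left = [y for r, y in zip(rows, labels) if r[f] <= t]
                right = [y for r, y in zip(rows, labels) if r[f] > t]
                if not left or not right:
                    continue
                errs = sum(y != majority(left) for y in left) + \
                       sum(y != majority(right) for y in right)
                if best is None or errs < best[0]:
                    best = (errs, f, t)
        if best is None:
            return majority(labels)
        _, f, t = best
        li = [i for i, r in enumerate(rows) if r[f] <= t]
        ri = [i for i, r in enumerate(rows) if r[f] > t]
        return (f, t,
                build_tree([rows[i] for i in li], [labels[i] for i in li],
                           depth + 1, max_depth),
                build_tree([rows[i] for i in ri], [labels[i] for i in ri],
                           depth + 1, max_depth))

    def predict(tree, row):
        while isinstance(tree, tuple):
            f, t, left, right = tree
            tree = left if row[f] <= t else right
        return tree

    # XOR-style toy data: separable by a depth-2 tree, well inside the cap.
    rows = [(0, 0), (0, 1), (1, 0), (1, 1)]
    labels = [0, 1, 1, 0]
    tree = build_tree(rows, labels)
    print([predict(tree, r) for r in rows])  # → [0, 1, 1, 0]
    ```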

    Expert Maintainers' Strategies and Needs when Understanding Software: A Case Study Approach

    No description supplied.

    Data Mining Algorithms for Smart Cities: A Bibliometric Analysis

    Smart cities connect people and places using innovative technologies such as Data Mining (DM), Machine Learning (ML), big data, and the Internet of Things (IoT). This paper presents a bibliometric analysis to provide a comprehensive overview of studies associated with DM technologies used in smart city applications. The study aims to identify the main DM techniques used in the context of smart cities and how the research field of DM for smart cities evolves over time. We adopted both qualitative and quantitative methods to explore the topic. We used the Scopus database to find relevant articles published in scientific journals. This study covers 197 articles published over the period from 2013 to 2021. For the bibliometric analysis, we used the Bibliometrix library, developed in R. Our findings show that there is a wide range of DM technologies used in every layer of a smart city project. Several ML algorithms, supervised or unsupervised, are adopted for operating the instrumentation, middleware, and application layers. The bibliometric analysis shows that DM for smart cities is a fast-growing scientific field, and scientists from all over the world show great interest in researching and collaborating on this interdisciplinary topic.

    Scoring and summarising gene product clusters using the Gene Ontology

    We propose an approach for quantifying the biological relatedness between gene products, based on their properties, and measure their similarities using exclusively statistical NLP techniques and Gene Ontology (GO) annotations. We also present a novel similarity figure of merit, based on the vector space model, which assesses gene expression analysis results and scores gene product clusters' biological coherency, making sole use of their annotation terms and textual descriptions. We define query profiles which rapidly detect a gene product cluster's dominant biological properties. Experimental results validate our approach, and illustrate a strong correlation between our coherency score and gene expression patterns.
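    The vector-space idea behind such a coherency score can be sketched as mean pairwise cosine similarity over bags of GO annotation terms. The paper's exact term weighting is not reproduced, and the GO term IDs and counts below are placeholders:

    ```python
    # Sketch: score a gene-product cluster's coherency as the mean pairwise
    # cosine similarity of its members' GO annotation term vectors.
    from collections import Counter
    from itertools import combinations
    from math import sqrt

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        na = sqrt(sum(v * v for v in a.values()))
        nb = sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def coherency(cluster):
        pairs = list(combinations(cluster, 2))
        return sum(cosine(a, b) for a, b in pairs) / len(pairs)

    # Placeholder annotations: two apoptosis-like profiles and one cell-cycle profile.
    genes = [
        Counter({"GO:0006915": 2, "GO:0008219": 1}),
        Counter({"GO:0006915": 1, "GO:0008219": 2}),
        Counter({"GO:0007049": 3}),
    ]
    print(round(coherency(genes), 3))  # → 0.267
    ```

    A cluster whose members share annotation terms scores close to 1; mixing in an unrelated profile, as above, pulls the mean down.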